
Distributed dictionary learning

FACCHINEI, Francisco
2017

Abstract

The paper studies distributed Dictionary Learning (DL) problems where the learning task is distributed over a multi-agent network with time-varying (nonsymmetric) connectivity. This formulation is relevant, for instance, in Big Data scenarios where massive amounts of data are collected/stored in different spatial locations and it is infeasible to aggregate and/or process all the data in a fusion center, due to resource limitations, communication overhead, or privacy considerations. We develop a general distributed algorithmic framework for the (nonconvex) DL problem and establish its asymptotic convergence. The new method hinges on Successive Convex Approximation (SCA) techniques coupled with i) a gradient tracking mechanism, instrumental in locally estimating the missing global information; and ii) a consensus step, as a mechanism to distribute the computations among the agents. To the best of our knowledge, this is the first distributed algorithm with provable convergence for the DL problem and, more generally, for bi-convex optimization problems over (time-varying) directed graphs.
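For intuition, the sketch below mimics the three ingredients named in the abstract: a local SCA-type step on the (convex) sparse-coding subproblem, a gradient-tracking recursion that lets each agent estimate the network-wide dictionary gradient, and a consensus averaging of the local dictionary copies. It is only an illustration, not the paper's algorithm: it assumes a fixed undirected ring graph with doubly stochastic weights (the paper handles time-varying directed graphs), uses a plain ISTA step for the sparse codes, and normalizes dictionary columns as is customary in DL. All names (Y, D, X, W, soft_threshold, local_grad_D) are invented for this example.

```python
# Hedged sketch (NOT the exact algorithm of the paper) of distributed DL with:
# (1) a local SCA-type step, (2) gradient tracking, (3) consensus averaging.
import numpy as np

rng = np.random.default_rng(0)

I, m, n, K = 4, 20, 30, 10            # agents, signal dim, samples/agent, atoms
lam, step, T = 0.1, 0.05, 200         # l1 weight, step size, iterations

Y = [rng.standard_normal((m, n)) for _ in range(I)]   # local data blocks
D = [rng.standard_normal((m, K)) for _ in range(I)]   # local dictionary copies
X = [np.zeros((K, n)) for _ in range(I)]              # local sparse codes

# Doubly stochastic weights on a fixed ring graph (simplifying assumption;
# the paper allows time-varying directed graphs).
W = np.zeros((I, I))
for i in range(I):
    W[i, i] = 0.5
    W[i, (i - 1) % I] = W[i, (i + 1) % I] = 0.25

def soft_threshold(Z, t):
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def local_grad_D(i):
    # Gradient of the local fitting term 0.5*||Y_i - D_i X_i||_F^2 w.r.t. D_i.
    return (D[i] @ X[i] - Y[i]) @ X[i].T

G = [local_grad_D(i) for i in range(I)]   # gradient-tracking variables
g_old = [G[i].copy() for i in range(I)]

for t in range(T):
    # (1) Local SCA-type step: with D_i fixed, the sparse-coding subproblem is
    #     convex; one proximal-gradient (ISTA) step serves as the surrogate.
    for i in range(I):
        grad_X = D[i].T @ (D[i] @ X[i] - Y[i])
        X[i] = soft_threshold(X[i] - step * grad_X, step * lam)

    # (2) Descent on the dictionary along the tracked global gradient estimate.
    D_half = [D[i] - step * G[i] for i in range(I)]

    # (3) Consensus: mix neighbors' dictionary copies, renormalize columns
    #     (standard unit-norm constraint in DL), and update the trackers.
    D_new = [sum(W[i, j] * D_half[j] for j in range(I)) for i in range(I)]
    g_new = []
    for i in range(I):
        D[i] = D_new[i] / (np.linalg.norm(D_new[i], axis=0, keepdims=True) + 1e-12)
        g_new.append(local_grad_D(i))
    G = [sum(W[i, j] * G[j] for j in range(I)) + g_new[i] - g_old[i]
         for i in range(I)]
    g_old = g_new

# Consensus drives the local dictionary copies toward agreement.
print("disagreement:", max(np.linalg.norm(D[i] - D[0]) for i in range(1, I)))
```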
2017
50th Asilomar Conference on Signals, Systems and Computers, ACSSC 2016
Signal Processing; Computer Networks and Communications
04 Publication in conference proceedings::04b Conference paper in volume
Distributed dictionary learning / Daneshmand, Amir; Scutari, Gesualdo; Facchinei, Francisco. - PRINT. - (2017), pp. 1001-1005. (Paper presented at the 50th Asilomar Conference on Signals, Systems and Computers, ACSSC 2016, held in Pacific Grove, United States, 2016) [10.1109/ACSSC.2016.7869518].
Files attached to this record

File: Daneshmand_Distributed_2017.pdf (archive administrators only)
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 220.85 kB
Format: Adobe PDF
Access: Contact the author

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/955855
Citations
  • Scopus: 8